GBDT (Gradient Boosting Decision Tree) builds a strong model by iteratively training weak learners (decision trees), and LightGBM (Light Gradient Boosting Machine) is a framework that implements the GBDT algorithm.
XGBoost has a large memory footprint during training; LightGBM was designed to fix this weakness (most notably via histogram-based split finding) and, under the same conditions, speeds up GBDT training. By lifting the memory and speed limits that GBDT runs into on large datasets, LightGBM is better suited to real-world applications.
Unlike level-wise growth, LightGBM grows trees leaf-wise (best-first). The advantage of leaf-wise growth is that, for the same number of splits, it can reduce the loss further and reach better accuracy; the drawback is that a leaf may keep splitting deeper and deeper and overfit, which can be prevented by setting a max_depth limit.
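For concreteness, a minimal sketch of such a cap (the values below are illustrative, not tuned):

import lightgbm as lgb
# num_leaves bounds the number of leaves per tree and max_depth hard-caps depth,
# both of which rein in leaf-wise growth before a single branch grows too deep
capped = lgb.LGBMClassifier(num_leaves=31, max_depth=5)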
pip install lightgbm
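The snippets that follow assume X_train, X_test, y_train, and y_test were prepared earlier in this series. As a self-contained stand-in (the breast-cancer dataset here is purely an assumption for illustration; any binary-classification split works):

# hypothetical stand-in for the data prepared earlier in the series
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)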
import lightgbm as lgb
# baseline LightGBM classifier with the binary objective
classifier = lgb.LGBMClassifier(objective='binary',
                                learning_rate=0.05,
                                n_estimators=100,
                                random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
>>> [[59  8]
 [ 4 29]]
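Reading the matrix (rows are true labels, columns are predictions): 59 true negatives, 8 false positives, 4 false negatives, and 29 true positives, for an accuracy of (59 + 29) / 100 = 0.88. The positive class gets precision 29/37 ≈ 0.78 and recall 29/33 ≈ 0.88, which the classification report below breaks down per class.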
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# measure_performance is a custom helper (presumably defined earlier in this
# series) that prints the classification report and confusion matrix in one call
LGBM_grid_measure = measure_performance(X=X_test, y=y_test, clf=classifier, show_classification_report=True, show_confusion_matrix=True)
# feature importances
print('Feature importances:', list(classifier.feature_importances_))
# visualization
import matplotlib.pyplot as plt
print('Plot feature importances...')
ax = lgb.plot_importance(classifier, max_num_features=X_train.shape[1])
plt.show()
# plot a single tree from the ensemble (requires the graphviz package)
ax = lgb.plot_tree(classifier, figsize=(20, 8), show_info=['split_gain'])
plt.show()
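By default, the importance plot ranks features by how many times each one is used in a split (importance_type='split'); passing importance_type='gain' to the estimator ranks them by total split gain instead. plot_tree draws one tree from the ensemble (the first, unless tree_index is set).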
We find that the ensemble-based algorithms all achieve quite high accuracy; as it happens, this dataset is also classified well by KNN.
In machine learning, the data source and the feature engineering applied to it define the upper bound on model accuracy; different algorithms merely approach that bound. So besides trying different algorithms, we can also evaluate different hyperparameter settings and measure the resulting accuracy to push closer to the bound.
Below we use LightGBM to demonstrate grid search (GridSearchCV): you list the parameter combinations you want to try, and the search finds the best one.
# candidate parameter values to search over (LightGBM-native names are
# accepted by the sklearn wrapper, though they may trigger alias warnings)
param_grid = {
    'num_leaves': [30, 40],           # leaves per tree
    'feature_fraction': [0.2, 0.3],   # fraction of features sampled per tree
    'bagging_fraction': [0.6, 0.7],   # row subsampling; needs bagging_freq > 0 to take effect
    'max_depth': [3, 5, 7],           # depth cap on leaf-wise growth
    'max_bin': [20],                  # histogram bins per feature
    'lambda_l1': [0.3, 0.6],          # L1 regularization
    'lambda_l2': [0.08, 0.09],        # L2 regularization
    'min_split_gain': [0.04, 0.05],   # minimum gain required to split
    'min_child_weight': [7]           # minimum sum of hessians in a child
}
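Note the cost: this grid spans 2 × 2 × 2 × 3 × 1 × 2 × 2 × 2 × 1 = 192 parameter combinations, and with GridSearchCV's default 5-fold cross-validation that means 960 model fits, so it pays to keep each candidate list short.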
# build the LightGBM model (same baseline settings as before)
classifier = lgb.LGBMClassifier(objective='binary',
                                learning_rate=0.05,
                                n_estimators=100,
                                random_state=0)
# GridSearchCV
from sklearn.model_selection import GridSearchCV
gridsearch = GridSearchCV(classifier, param_grid)
gridsearch.fit(X_train, y_train)  # best_params_ only exists after fitting
print('Best parameters:', gridsearch.best_params_)
# Final Model
print('Start training the final model...')
LGBM = lgb.LGBMClassifier(objective='binary',
                          learning_rate=0.05,
                          n_estimators=100,
                          random_state=0,
                          num_leaves=gridsearch.best_params_['num_leaves'],
                          feature_fraction=gridsearch.best_params_['feature_fraction'],
                          bagging_fraction=gridsearch.best_params_['bagging_fraction'],
                          max_depth=gridsearch.best_params_['max_depth'],
                          max_bin=gridsearch.best_params_['max_bin'],
                          lambda_l1=gridsearch.best_params_['lambda_l1'],
                          lambda_l2=gridsearch.best_params_['lambda_l2'],
                          min_split_gain=gridsearch.best_params_['min_split_gain'],
                          min_child_weight=gridsearch.best_params_['min_child_weight'])
# equivalently: lgb.LGBMClassifier(objective='binary', learning_rate=0.05,
#                                  n_estimators=100, random_state=0,
#                                  **gridsearch.best_params_)
%time LGBM_fit = LGBM.fit(X_train, y_train)
print('Training finished')
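To close the loop, a minimal evaluation sketch for the tuned model on the held-out split (reusing the X_test and y_test from above):

# score the tuned model on the held-out test split
from sklearn.metrics import accuracy_score
y_pred_tuned = LGBM.predict(X_test)
print('Tuned test accuracy:', accuracy_score(y_test, y_pred_tuned))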
For more details, please refer to the LightGBM documentation.